82 research outputs found

    Efficient whole-system impact analysis methods with applications in software development

    During software change impact analysis, we assess the consequences of changes made to a software system, which has important applications in, for instance, change propagation, cost estimation, software quality, and testing. We developed impact analysis methods that can be applied effectively and efficiently to large, heterogeneous, real-life applications as well. Previously available methods could provide results only in limited environments and for systems of limited size. Apart from enhancing the existing static and dynamic slicing and dependence analysis algorithms, we achieved results in related areas such as the investigation of dependences based on metrics, conceptual coupling, quality models, and the prediction of defects and productivity. These areas mostly support the application of the methods in practice. We also contributed in the fields of special technologies, for instance, dependences in database systems and the analysis of low-level languages. Regarding the applications of impact analysis, we developed novel methods for test optimization, test coverage measurement and prioritization, and change propagation. The developed methods provided the basis for further projects, in which certain software products were also extended based on our methods.
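The core idea behind dependence-based impact analysis can be illustrated with a minimal sketch: compute the forward reachability of the changed entities over a dependence graph, so every entity that transitively depends on a change is flagged as potentially impacted. The graph shape and entity names below are hypothetical, not the abstract's actual methods.

```python
from collections import deque

def impact_set(dependents, changed):
    """Forward reachability over a dependence graph: every entity that
    transitively depends on a changed entity is potentially impacted.
    `dependents` maps an entity to the entities that depend on it."""
    impacted = set(changed)
    queue = deque(changed)
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, ()):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted - set(changed)

# Toy dependence graph: B and C depend on A; D depends on C.
deps = {"A": ["B", "C"], "C": ["D"]}
print(sorted(impact_set(deps, ["A"])))  # ['B', 'C', 'D']
```

Real whole-system analyses, as the abstract notes, must scale this idea to heterogeneous architectures and much finer-grained dependences (e.g., statement-level slices) than this toy call-style graph.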

    Using contextual knowledge in interactive fault localization

    Tool support for automated fault localization in program debugging is limited because state-of-the-art algorithms often fail to provide efficient help to the user. They usually offer a ranked list of suspicious code elements, but the fault is not guaranteed to be found among the highest ranks. In Spectrum-Based Fault Localization (SBFL), which uses the code coverage of test cases and their execution outcomes to calculate the ranks, the developer has to investigate several locations before finding the faulty code element. Yet, all the knowledge the developer has a priori or acquires during this process is not reused by the SBFL tool. There are existing approaches in which the developer interacts with the SBFL algorithm by giving feedback on the elements of the prioritized list. We propose a new approach called iFL, which extends interactive approaches by exploiting the user's contextual knowledge about the next item in the ranked list (e.g., a statement), with which the suspiciousness of larger code entities (e.g., a whole function) can be repositioned. We also implemented a closely related algorithm proposed by Gong et al., called Talk. First, we evaluated iFL using simulated users and compared the results to SBFL and Talk. Next, we introduced two types of imperfection into the simulation: the user's knowledge and confidence levels. On SIR and Defects4J, the results showed notable improvements in fault localization efficiency, even with strong user imperfections. We then empirically evaluated the effectiveness of the approach with real users in two sets of experiments: a quantitative evaluation of the successfulness of using iFL, and a qualitative evaluation of practical uses of the approach with experienced developers in think-aloud sessions.
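To make the SBFL baseline concrete, here is a minimal sketch of a standard suspiciousness formula (Ochiai, a well-known SBFL metric, not necessarily the one used in the abstract) computed from a coverage matrix and test outcomes. The test and statement names are invented for illustration.

```python
import math

def ochiai(coverage, outcomes):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep)),
    where ef/ep count the failing/passing tests covering the element.
    `coverage[t]` is the set of elements executed by test t;
    `outcomes[t]` is True if test t passed."""
    total_failed = sum(1 for ok in outcomes if not ok)
    elements = set().union(*coverage)
    scores = {}
    for e in elements:
        ef = sum(1 for cov, ok in zip(coverage, outcomes) if e in cov and not ok)
        ep = sum(1 for cov, ok in zip(coverage, outcomes) if e in cov and ok)
        denom = math.sqrt(total_failed * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Statement s3 is covered only by the failing test, so it ranks first.
cov = [{"s1", "s2"}, {"s1", "s3"}]
print(ochiai(cov, [True, False]))
```

Interactive approaches such as iFL then let user feedback reorder this ranked list instead of leaving the scores fixed.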

    Uncovering Dependence Clusters and Linchpin Functions

    Dependence clusters are (maximal) collections of mutually dependent source code entities according to some dependence relation. Their presence in software complicates many maintenance activities, including testing, refactoring, and feature extraction. Despite several studies finding them common in production code, their formation, identification, and overall structure are not well understood, partly because of the challenges of approximating true dependences between program entities. Previous research has considered two approximate dependence relations: a fine-grained, statement-level relation using control and data dependences from a program's System Dependence Graph, and a coarser relation based on function-level control-flow reachability. In principle, the first is more expensive and more precise than the second. Using a collection of twenty programs, we present an empirical investigation of the clusters identified by these two approaches. In support of the analysis, we consider a hybrid cluster type that works at the coarser function level but is based on the higher-precision statement-level dependences. The three types of clusters are compared based on their slice sets using two clustering metrics. We also perform an extensive analysis of the programs to identify linchpin functions, that is, functions primarily responsible for holding a cluster together. Results include evidence that the less expensive, coarser approaches can often be used as effective proxies for the more expensive, finer-grained approaches. Finally, the linchpin analysis shows that linchpin functions can be effectively and automatically identified.
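The notion of a dependence cluster as a maximal set of mutually dependent entities can be sketched as computing mutually reachable groups (strongly connected components) over a function-level dependence graph. This is only an illustration of the definition under that reachability-based approximation; the function names are hypothetical.

```python
def mutual_clusters(graph):
    """Approximate dependence clusters as maximal sets of nodes that all
    transitively depend on each other (strongly connected components).
    `graph` maps a node to the nodes it depends on."""
    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            for m in graph.get(stack.pop(), ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    reach = {n: reachable(n) for n in graph}
    clusters, assigned = [], set()
    for n in graph:
        if n in assigned:
            continue
        # Nodes mutually reachable with n form n's cluster.
        cluster = {m for m in graph if n in reach[m] and m in reach[n]} | {n}
        clusters.append(cluster)
        assigned |= cluster
    return clusters

# f, g, and h depend on each other in a cycle -> one cluster; i stands alone.
g = {"f": ["g"], "g": ["h"], "h": ["f"], "i": ["f"]}
print(mutual_clusters(g))  # [{'f', 'g', 'h'}, {'i'}] (set order may vary)
```

A linchpin function in this picture would be a node whose removal breaks such a cycle, splitting one large cluster into smaller ones.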
